
    Classifying Cognitive Profiles Using Machine Learning with Privileged Information in Mild Cognitive Impairment

    Early diagnosis of dementia is critical for assessing disease progression and potential treatment. State-of-the-art machine learning techniques have been increasingly employed to take on this diagnostic task. In this study, we employed Generalized Matrix Learning Vector Quantization (GMLVQ) classifiers to discriminate patients with Mild Cognitive Impairment (MCI) from healthy controls based on their cognitive skills. Further, we adopted a "learning with privileged information" approach to combine cognitive and fMRI data for the classification task. The resulting classifier operates solely on the cognitive data while incorporating the fMRI data as privileged information (PI) during training. This novel classifier is of practical use, as the collection of brain imaging data is not always possible with patients and older participants. MCI patients and healthy age-matched controls were trained to extract structure from temporal sequences. We asked whether machine learning classifiers can be used to discriminate patients from controls and whether differences between these groups relate to individual cognitive profiles. To this end, we tested participants on four cognitive tasks: working memory, cognitive inhibition, divided attention, and selective attention. We also collected fMRI data before and after training on a probabilistic sequence learning task and extracted fMRI responses and connectivity as features for machine learning classifiers. Our results show that the PI-guided GMLVQ classifiers outperform the baseline classifier that used only the cognitive data. In addition, we found that for the baseline classifier, divided attention is the only relevant cognitive feature. When PI was incorporated, divided attention remained the most relevant feature, while cognitive inhibition also became relevant for the task.
    Interestingly, this analysis for the fMRI GMLVQ classifier suggests that (1) when the overall fMRI signal is used as input to the classifier, the post-training session is most relevant; and (2) when the graph feature reflecting the underlying spatiotemporal fMRI pattern is used, the pre-training session is most relevant. Taken together, these results suggest that brain connectivity before training and the overall fMRI signal after training are both diagnostic of cognitive skills in MCI. PT and YS were supported by EPSRC grant no. EP/L000296/1, "Personalized Medicine through Learning in the Model Space." This work was supported by grants to ZK from the Biotechnology and Biological Sciences Research Council (H012508), the Leverhulme Trust (RF-2011-378), and the European Community's Seventh Framework Programme (FP7/2007-2013) under agreement PITN-GA-2011-290011.
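    The core GMLVQ idea above can be sketched in a few lines: one prototype per class plus a learned relevance matrix Lambda = Omega^T Omega, whose diagonal ranks feature relevance and underlies statements like "divided attention is the only relevant feature." This is a minimal illustration under assumed parameters, not the authors' implementation; the privileged-information extension is omitted, and all names and learning rates are hypothetical.

```python
import numpy as np

def gmlvq_train(X, y, n_epochs=50, lr_w=0.05, lr_omega=0.005, seed=0):
    """Minimal GMLVQ sketch: one prototype per class, adaptive relevance matrix.

    Distance is d(x, w) = (x - w)^T Omega^T Omega (x - w); the diagonal of
    Lambda = Omega^T Omega ranks how relevant each input feature is.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    W = np.array([X[y == c].mean(axis=0) for c in classes])  # prototypes at class means
    Omega = np.eye(X.shape[1])

    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):
            x, c = X[i], y[i]
            Lam = Omega.T @ Omega
            dists = np.array([(x - w) @ Lam @ (x - w) for w in W])
            jp = int(np.argmin(np.where(classes == c, dists, np.inf)))  # closest correct
            jn = int(np.argmin(np.where(classes != c, dists, np.inf)))  # closest wrong
            dp, dn = dists[jp], dists[jn]
            denom = (dp + dn) ** 2 + 1e-12
            gp, gn = 2 * dn / denom, 2 * dp / denom  # derivatives of mu = (dp-dn)/(dp+dn)
            ep, en = x - W[jp], x - W[jn]
            W[jp] += lr_w * gp * 2 * (Lam @ ep)   # pull the correct prototype closer
            W[jn] -= lr_w * gn * 2 * (Lam @ en)   # push the wrong prototype away
            # relevance-matrix update, then renormalize so trace(Lambda) = 1
            Omega -= lr_omega * 2 * Omega @ (gp * np.outer(ep, ep) - gn * np.outer(en, en))
            Omega /= np.sqrt((Omega ** 2).sum())

    return W, classes, Omega

def gmlvq_predict(X, W, classes, Omega):
    diffs = (X[:, None, :] - W[None, :, :]) @ Omega.T
    return classes[np.argmin((diffs ** 2).sum(axis=-1), axis=1)]
```

    After training, `np.diag(Omega.T @ Omega)` gives the per-feature relevance profile; in the study's baseline classifier this is the quantity on which divided attention stood out.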

    Learning predictive statistics: strategies and brain mechanisms

    When immersed in a new environment, we are challenged to decipher initially incomprehensible streams of sensory information. Yet, quite rapidly, the brain finds structure and meaning in these incoming signals, helping us to predict and prepare for future actions. This skill relies on extracting the statistics of event streams in the environment that contain regularities of variable complexity: from simple repetitive patterns to complex probabilistic combinations. Here, we test the brain mechanisms that mediate our ability to adapt to the environment's statistics and predict upcoming events. By combining behavioral training and multi-session fMRI in human participants (male and female), we track the cortico-striatal mechanisms that mediate learning of temporal sequences as they change in structure complexity. We show that learning of predictive structures relates to individual decision strategy; that is, selecting the most probable outcome in a given context (maximizing) vs. matching the exact sequence statistics. These strategies engage distinct human brain regions: maximizing engages dorsolateral prefrontal, cingulate, and sensory-motor regions and basal ganglia (dorsal caudate, putamen), while matching engages occipito-temporal regions (including the hippocampus) and basal ganglia (ventral caudate). Our findings provide evidence for distinct cortico-striatal mechanisms that facilitate our ability to extract behaviorally relevant statistics to make predictions. SIGNIFICANCE STATEMENT: Making predictions about future events relies on interpreting streams of information that may initially appear incomprehensible. Past work has studied how humans identify repetitive patterns and associative pairings. However, the natural environment contains regularities that vary in complexity: from simple repetition to complex probabilistic combinations.
    Here, we combine behavior and multi-session fMRI to track the brain mechanisms that mediate our ability to adapt to changes in the environment's statistics. We provide evidence for an alternate route for learning complex temporal statistics: extracting the most probable outcome in a given context is implemented by interactions between executive and motor cortico-striatal mechanisms, rather than the visual cortico-striatal circuits (including hippocampal cortex) that support learning of the exact temporal statistics. This work was supported by grants to PT from the Engineering and Physical Sciences Research Council (EP/L000296/1); to ZK from the Biotechnology and Biological Sciences Research Council (H012508), the Leverhulme Trust (RF-2011-378), and the European Community's Seventh Framework Programme (FP7/2007-2013) under agreement PITN-GA-2011-290011; and to AEW from the Wellcome Trust (095183/Z/10/Z).
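    The maximizing-versus-matching distinction can be made concrete with a small simulation. The transition probabilities below are illustrative assumptions, not the study's actual sequence design:

```python
import numpy as np

# Hypothetical first-order statistics: the next symbol's probability is
# contingent on the current symbol (the context).
cond = np.array([[0.8, 0.2],
                 [0.3, 0.7]])

def simulate(strategy, n=20000, seed=0):
    """Prediction accuracy of a response strategy on the Markov chain."""
    rng = np.random.default_rng(seed)
    s, correct = 0, 0
    for _ in range(n):
        nxt = int(rng.choice(2, p=cond[s]))
        if strategy == "maximize":      # always pick the most probable outcome
            guess = int(np.argmax(cond[s]))
        else:                           # "match": reproduce the statistics
            guess = int(rng.choice(2, p=cond[s]))
        correct += guess == nxt
        s = nxt
    return correct / n
```

    Maximizing reaches the accuracy ceiling set by the statistics (here the dominant probability in each context), while probability matching pays a cost of roughly 2p(1-p) per context, which is why the two strategies are behaviorally separable.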

    Learning predictive statistics from temporal sequences: Dynamics and strategies

    Human behavior is guided by our expectations about the future. Often, we make predictions by monitoring how event sequences unfold, even though such sequences may appear incomprehensible. Event structures in the natural environment typically vary in complexity: from simple repetition to complex probabilistic combinations. How do we learn these structures? Here we investigate the dynamics of structure learning by tracking human responses to temporal sequences that change in structure unbeknownst to the participants. Participants were asked to predict the upcoming item following a probabilistic sequence of symbols. Using a Markov process, we created a family of sequences: from simple frequency statistics (e.g., some symbols are more probable than others) to context-based statistics (e.g., symbol probability is contingent on preceding symbols). We demonstrate the dynamics with which individuals adapt to changes in the environment's statistics; that is, they extract the behaviorally relevant structures to make predictions about upcoming events. Further, we show that this structure learning relates to individual decision strategy; faster learning of complex structures relates to selecting the most probable outcome in a given context (maximizing) rather than matching the exact sequence statistics. Our findings provide evidence for alternate routes to learning of behaviorally relevant statistics that facilitate our ability to predict future events in variable environments. This work was supported by grants to PT from the Engineering and Physical Sciences Research Council (EP/L000296/1); to ZK from the Biotechnology and Biological Sciences Research Council (H012508), the Leverhulme Trust (RF-2011-378), and the European Community's Seventh Framework Programme (FP7/2007-2013) under agreement PITN-GA-2011-290011; and to AW from the Wellcome Trust (095183/Z/10/Z).
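    A family of sequences of this kind can be sketched with a Markov transition matrix: identical rows yield pure frequency statistics, while distinct rows make the successor contingent on the preceding symbol (context-based statistics). All numbers here are illustrative, not the study's design:

```python
import numpy as np

def make_sequence(trans, length, rng):
    """Generate a symbol sequence from a Markov transition matrix."""
    n = trans.shape[0]
    seq = [int(rng.integers(n))]
    for _ in range(length - 1):
        seq.append(int(rng.choice(n, p=trans[seq[-1]])))
    return np.array(seq)

rng = np.random.default_rng(0)

# Frequency statistics: every row identical, symbol 0 simply more probable.
freq = np.array([[0.7, 0.3],
                 [0.7, 0.3]])

# Context-based statistics: the successor depends on the current symbol,
# producing strong alternation here.
ctx = np.array([[0.1, 0.9],
                [0.9, 0.1]])

s_freq = make_sequence(freq, 10000, rng)
s_ctx = make_sequence(ctx, 10000, rng)
```

    A learner tracking only symbol frequencies masters the first family but not the second, which requires conditioning on the preceding symbol; this is the complexity manipulation the abstract describes.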

    Separate cortical stages in amodal completion revealed by functional magnetic resonance adaptation

    Background: Objects in our environment are often partly occluded, yet we effortlessly perceive them as whole and complete. This phenomenon is called visual amodal completion. Psychophysical investigations suggest that the process of completion starts from a representation of the (visible) physical features of the stimulus and ends with a completed representation of the stimulus. The goal of our study was to investigate both stages of the completion process by localizing the brain regions involved in processing the physical features of the stimulus as well as the brain regions representing the completed stimulus. Results: Using fMRI adaptation, we reveal clearly distinct regions in the visual cortex of humans involved in the processing of amodal completion: early visual cortex (presumably V1) processes the local contour information of the stimulus, whereas regions in the inferior temporal cortex represent the completed shape. Furthermore, our data suggest that at the level of inferior temporal cortex the original local contour information is not preserved but replaced by the representation of the amodally completed percept. Conclusion: These findings provide neuroimaging evidence for a multiple-step theory of amodal completion and further insights into the neuronal correlates of visual perception.

    Bringing the real world into the fMRI scanner: Repetition effects for pictures versus real objects

    Our understanding of the neural underpinnings of perception is largely built upon studies employing 2-dimensional (2D) planar images. Here we used slow event-related functional imaging in humans to examine whether neural populations show a characteristic repetition-related change in haemodynamic response for real-world 3-dimensional (3D) objects, an effect commonly observed using 2D images. As expected, trials involving 2D pictures of objects produced robust repetition effects within classic object-selective cortical regions along the ventral and dorsal visual processing streams. Surprisingly, however, repetition effects were weak, if not absent, on trials involving the 3D objects. These results suggest that the neural mechanisms involved in processing real objects may be distinct from those engaged when we encounter a 2D representation of the same items. These preliminary findings suggest the need for further research with ecologically valid stimuli in other imaging designs to broaden our understanding of the neural mechanisms underlying human vision.

    Distinct Visual Working Memory Systems for View-Dependent and View-Invariant Representation

    Background: How do people sustain a visual representation of the environment? Currently, many researchers argue that a single visual working memory system sustains non-spatial object information such as colors and shapes. However, previous studies tested visual working memory for two-dimensional objects only. In consequence, the nature of visual working memory for three-dimensional (3D) object representation remains unknown. Methodology/Principal Findings: Here, I show that when sustaining information about 3D objects, visual working memory clearly divides into two separate, specialized memory systems, rather than one system, as was previously thought. One memory system gradually accumulates sensory information, forming an increasingly precise view-dependent representation of the scene over the course of several seconds. A second memory system sustains view-invariant representations of 3D objects. The view-dependent memory system has a storage capacity of 3–4 representations, and the view-invariant memory system has a storage capacity of 1–2 representations. These systems can operate independently from one another and do not compete for working memory storage resources. Conclusions/Significance: These results provide evidence that visual working memory sustains object information in two separate, specialized memory systems. One memory system sustains view-dependent representations of the scene, akin to the view-specific representations that guide place recognition during navigation in humans, rodents, and insects.